
    Video-driven Neural Physically-based Facial Asset for Production

    Production-level workflows for producing convincing 3D dynamic human faces have long relied on an assortment of labor-intensive tools for geometry and texture generation, motion capture and rigging, and expression synthesis. Recent neural approaches automate individual components, but the corresponding latent representations cannot provide artists with the explicit controls offered by conventional tools. In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets. For data collection, we construct a hybrid multiview-photometric capture stage coupled with ultra-fast video cameras to obtain raw 3D facial assets. We then model the facial expression, geometry, and physically-based textures using separate VAEs, imposing a global MLP-based expression mapping across the latent spaces of the respective networks to preserve characteristics across attributes. We also model the delta information as wrinkle maps for the physically-based textures, achieving high-quality 4K dynamic textures. We demonstrate our approach on high-fidelity performer-specific facial capture and cross-identity facial motion retargeting. In addition, our multi-VAE-based neural asset, together with fast adaptation schemes, can be deployed to handle in-the-wild videos. We further illustrate the utility of our explicit facial disentangling strategy with a variety of promising, highly realistic physically-based editing results. Comprehensive experiments show that our technique provides higher accuracy and visual fidelity than previous video-driven facial reconstruction and animation methods. Comment: For project page, see https://sites.google.com/view/npfa/
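    The abstract above describes separate VAEs per facial attribute tied together by a global MLP-based expression mapping across their latent spaces. Below is a minimal PyTorch-style sketch of that layout, assuming hypothetical latent sizes, network depths, and input dimensions; the paper's actual architectures, losses, and training procedure are not reproduced here.

        # Sketch only: layer sizes, names, and the single-Gaussian VAE form are
        # assumptions, not the paper's actual architecture.
        import torch
        import torch.nn as nn

        class AttributeVAE(nn.Module):
            """One VAE per facial attribute (expression, geometry, texture)."""
            def __init__(self, in_dim, latent_dim=64):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                         nn.Linear(256, 2 * latent_dim))
                self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                         nn.Linear(256, in_dim))

            def encode(self, x):
                mu, logvar = self.enc(x).chunk(2, dim=-1)
                z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
                return z, mu, logvar

            def forward(self, x):
                z, mu, logvar = self.encode(x)
                return self.dec(z), mu, logvar

        class ExpressionMapper(nn.Module):
            """Global MLP mapping the expression latent into the geometry and
            texture latent spaces, so one expression code drives all attributes."""
            def __init__(self, latent_dim=64):
                super().__init__()
                self.to_geom = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                             nn.Linear(128, latent_dim))
                self.to_tex = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                            nn.Linear(128, latent_dim))

            def forward(self, z_expr):
                return self.to_geom(z_expr), self.to_tex(z_expr)

        # Driving the geometry and texture decoders from a video-derived expression code.
        expr_vae, geom_vae, tex_vae = AttributeVAE(50), AttributeVAE(3000), AttributeVAE(4096)
        mapper = ExpressionMapper()
        z_expr, _, _ = expr_vae.encode(torch.randn(1, 50))
        z_geom, z_tex = mapper(z_expr)
        geometry, texture = geom_vae.dec(z_geom), tex_vae.dec(z_tex)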

    Mapping the 2021 October Flood Event in the Subsiding Taiyuan Basin By Multi-Temporal SAR Data

    A flood event induced by heavy rainfall hit the Taiyuan basin in north China in early October 2021. In this study, we map the evolution of the flood event using multi-temporal synthetic aperture radar (SAR) images acquired by Sentinel-1. First, we develop a spatiotemporal filter based on low-rank tensor approximation (STF-LRTA) to remove speckle noise from the SAR images. Next, we employ the classic log-ratio change indicator and the minimum-error thresholding algorithm to delineate the flood from the filtered images. Finally, we relate the flood inundation to land subsidence in the Taiyuan basin by jointly analyzing the multi-temporal SAR change detection results and pre-flood interferometric SAR (InSAR) time-series measurements. Validation experiments compare the proposed filter with the Refined-Lee filter, the Gamma filter, and an SHPS-based multi-temporal SAR filter. The results demonstrate the effectiveness and advantage of the proposed STF-LRTA method in SAR despeckling and detail preservation, as well as its applicability to changing scenes. The joint analyses reveal that land subsidence might be an important contributor to the flood event, and that the flood recession process correlates linearly with time and subsidence magnitude. This work was financially supported by the National Natural Science Foundation of China (grant numbers 41904001 and 41774006), the China Postdoctoral Science Foundation (grant number 2018M640733), the National Key Research and Development Program of China (grant number 2019YFC1509201), and the National Postdoctoral Program for Innovative Talents (grant number BX20180220).
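    For readers unfamiliar with the change-detection step, the sketch below combines a generic log-ratio indicator with a Kittler-Illingworth style minimum-error threshold in NumPy. It is a simplified illustration only: the STF-LRTA despeckling filter, the exact thresholding variant, and the parameters used in the study are assumptions, not the authors' implementation.

        # Sketch only: generic log-ratio change indicator + minimum-error
        # (Kittler-Illingworth) threshold; not the paper's exact pipeline.
        import numpy as np

        def log_ratio(pre, post, eps=1e-6):
            """Classic SAR change indicator: log of the intensity ratio."""
            return np.log((post + eps) / (pre + eps))

        def minimum_error_threshold(img, bins=256):
            """Pick the threshold minimizing the Kittler-Illingworth criterion
            under a two-Gaussian model of the histogram."""
            hist, edges = np.histogram(img, bins=bins)
            p = hist.astype(float) / hist.sum()
            centers = 0.5 * (edges[:-1] + edges[1:])
            best_t, best_j = centers[0], np.inf
            for t in range(1, bins - 1):
                p1, p2 = p[:t].sum(), p[t:].sum()
                if p1 < 1e-9 or p2 < 1e-9:
                    continue
                m1 = (p[:t] * centers[:t]).sum() / p1
                m2 = (p[t:] * centers[t:]).sum() / p2
                v1 = (p[:t] * (centers[:t] - m1) ** 2).sum() / p1
                v2 = (p[t:] * (centers[t:] - m2) ** 2).sum() / p2
                if v1 < 1e-9 or v2 < 1e-9:
                    continue
                j = (1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2)))
                       - 2 * (p1 * np.log(p1) + p2 * np.log(p2)))
                if j < best_j:
                    best_j, best_t = j, centers[t]
            return best_t

        # Flag pixels whose backscatter dropped sharply after the event
        # (open water appears dark), using already-despeckled intensities.
        pre, post = np.random.rand(512, 512), np.random.rand(512, 512)
        lr = log_ratio(pre, post)
        flood_mask = lr < minimum_error_threshold(lr)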

    SCULPTOR: Skeleton-Consistent Face Creation Using a Learned Parametric Generator

    Recent years have seen growing interest in 3D human face modeling due to its wide applications in digital humans, character generation, and animation. Existing approaches overwhelmingly emphasize modeling the exterior shapes, textures, and skin properties of faces, ignoring the inherent correlation between inner skeletal structures and appearance. In this paper, we present SCULPTOR, 3D face creation with Skeleton Consistency Using a Learned Parametric facial generaTOR, aiming to facilitate the easy creation of both anatomically correct and visually convincing face models via a hybrid parametric-physical representation. At the core of SCULPTOR is LUCY, the first large-scale shape-skeleton face dataset, built in collaboration with plastic surgeons. Named after the fossil of one of the oldest known human ancestors, our LUCY dataset contains high-quality Computed Tomography (CT) scans of the complete human head before and after orthognathic surgeries, which are critical for evaluating surgery results. LUCY consists of 144 scans of 72 subjects (31 male and 41 female), where each subject has two CT scans taken pre- and post-orthognathic operation. Based on our LUCY dataset, we learn a novel skeleton-consistent parametric facial generator, SCULPTOR, which can create the unique and nuanced facial features that help define a character while maintaining physiological soundness. SCULPTOR jointly models the skull, face geometry, and face appearance under a unified data-driven framework by separating the depiction of a 3D face into shape, pose, and facial expression blend shapes. SCULPTOR preserves both anatomical correctness and visual realism in facial generation tasks compared with existing methods. Finally, we showcase the robustness and effectiveness of SCULPTOR in various novel applications. Comment: 16 pages, 13 figures
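    The separation into shape, pose, and expression blend shapes mentioned above follows the familiar additive parametric-model pattern. The NumPy sketch below illustrates that pattern with placeholder bases and dimensions; it is not SCULPTOR's learned model and omits the skull component and any pose-dependent articulation.

        # Sketch only: an additive blend-shape face model in the spirit of the
        # decomposition described above; all bases and sizes are placeholders.
        import numpy as np

        N_VERTS, N_SHAPE, N_POSE, N_EXPR = 5000, 300, 36, 100

        rng = np.random.default_rng(0)
        template = rng.standard_normal((N_VERTS, 3))              # mean face geometry
        shape_basis = rng.standard_normal((N_SHAPE, N_VERTS, 3))  # identity blend shapes
        pose_basis = rng.standard_normal((N_POSE, N_VERTS, 3))    # pose-corrective blend shapes
        expr_basis = rng.standard_normal((N_EXPR, N_VERTS, 3))    # expression blend shapes

        def blend_shape_face(beta, theta, psi):
            """Vertices = template + shape, pose, and expression offsets."""
            return (template
                    + np.tensordot(beta, shape_basis, axes=1)
                    + np.tensordot(theta, pose_basis, axes=1)
                    + np.tensordot(psi, expr_basis, axes=1))

        verts = blend_shape_face(rng.standard_normal(N_SHAPE) * 0.01,
                                 np.zeros(N_POSE),
                                 rng.standard_normal(N_EXPR) * 0.01)
        print(verts.shape)  # (5000, 3)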

    Relightable Neural Human Assets from Multi-view Gradient Illuminations

    Human modeling and relighting are two fundamental problems in computer vision and graphics, where high-quality datasets can greatly facilitate related research. However, most existing human datasets only provide multi-view human images captured under the same illumination. Although valuable for modeling tasks, they are not readily usable for relighting problems. To promote research in both fields, in this paper we present UltraStage, a new 3D human dataset that contains more than 2,000 high-quality human assets captured under both multi-view and multi-illumination settings. Specifically, for each example, we provide 32 surrounding views illuminated with one white light and two gradient illuminations. In addition to regular multi-view images, the gradient illuminations help recover detailed surface normals and spatially-varying material maps, enabling various relighting applications. Inspired by recent advances in neural representations, we further interpret each example as a neural human asset which allows novel view synthesis under arbitrary lighting conditions. We show that our neural human assets achieve extremely high capture quality and can represent fine details such as facial wrinkles and cloth folds. We also validate UltraStage on single-image relighting tasks, training neural networks with virtually relit data from the neural assets and demonstrating realistic rendering improvements over prior art. UltraStage will be publicly available to the community to stimulate significant future developments in various human modeling and rendering tasks. The dataset is available at https://miaoing.github.io/RNHA. Comment: Project page: https://miaoing.github.io/RNH
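    Gradient illuminations are typically used to recover per-pixel surface normals from ratio images. The sketch below shows one common Lambertian formulation, assuming the two gradient patterns are a linear gradient and its complement with the x/y/z gradients encoded in the RGB channels; UltraStage's actual processing pipeline may differ.

        # Sketch only: Lambertian normal recovery from a gradient illumination and
        # its complement; a generic formulation, not the dataset's exact processing.
        import numpy as np

        def normals_from_gradient_pair(grad_img, comp_img, eps=1e-6):
            """grad_img, comp_img: HxWx3 images lit by a linear gradient pattern
            and its complement. The per-channel ratio maps each pixel to [-1, 1]
            and is read as the surface normal's x/y/z components."""
            n = (grad_img - comp_img) / (grad_img + comp_img + eps)
            n /= np.linalg.norm(n, axis=-1, keepdims=True) + eps
            return n

        grad = np.random.rand(480, 640, 3)
        comp = 1.0 - grad
        normals = normals_from_gradient_pair(grad, comp)  # HxWx3 unit normals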

    Dynamic twisting and imaging of moiré crystals

    The electronic band structure is an intrinsic property of solid-state materials that is intimately connected to the crystalline arrangement of atoms. Moiré crystals, which emerge in twisted stacks of atomic layers, feature a band structure that can be continuously tuned by changing the twist angle between adjacent layers. This class of artificial materials blends the discrete nature of the moiré superlattice with the intrinsic symmetries of the constituent materials, providing a versatile platform for investigating correlated phenomena whose origins are rooted in the geometry of the superlattice, from insulating states at "magic angles" to flat bands in quasicrystals. Here we present a route to mechanically tune the twist angle of individual atomic layers with a precision of a fraction of a degree inside a scanning probe microscope, enabling continuous in-situ control of the electronic band structure. Using nanostructured rotor devices, we achieve collective rotation of a single layer of atoms with minimal deformation of the crystalline lattice. In twisted bilayer graphene, we demonstrate nanoscale control of the moiré superlattice period via external rotations, as revealed by piezoresponse force microscopy. We also extend this methodology to create twistable boron nitride devices, which could enable dynamic control of the domain structure of moiré ferroelectrics. This approach provides a route for real-time manipulation of moiré materials, allowing systematic exploration of their phase diagrams at multiple twist angles in a single device.
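    As a rough sense of scale for the nanoscale control described above, the moiré superlattice period of two identical lattices twisted by a small angle θ is L = a / (2 sin(θ/2)). The short sketch below evaluates this standard relation for a few illustrative angles using graphene's lattice constant (a ≈ 0.246 nm).

        # Standard moire-period relation for small twists between identical
        # lattices: L = a / (2 sin(theta/2)), with graphene's a ~= 0.246 nm.
        import math

        def moire_period_nm(theta_deg, a_nm=0.246):
            return a_nm / (2 * math.sin(math.radians(theta_deg) / 2))

        for theta in (0.5, 1.1, 2.0, 5.0):
            print(f"{theta:4.1f} deg -> {moire_period_nm(theta):6.1f} nm")
        # ~1.1 deg (the "magic angle") gives a period of roughly 13 nm.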

    Engineering a mevalonate pathway in Halomonas bluephagenesis for the production of lycopene

    Introduction: Red-colored lycopene has received remarkable attention in medicine because of its antioxidant properties, which reduce the risks of many human cancers. However, extraction of lycopene from natural hosts is limited. Moreover, chemically synthesized lycopene raises safety concerns due to residual chemical reagents. Halomonas bluephagenesis is a versatile chassis for the production of fine chemicals because of its ability to grow in open, unsterilized conditions. Methods: A heterologous mevalonate (MVA) pathway was introduced into H. bluephagenesis strain TD1.0 to engineer a bacterial host for lycopene production. A pTer7 plasmid mediating the expression of six MVA pathway genes under the control of the phage PMmp1 promoter and the Escherichia coli Ptrc promoter, and a pTer3 plasmid providing the downstream lycopene biosynthesis genes derived from Streptomyces avermitilis, were constructed and transformed into TD1.0. Lycopene production in the engineered H. bluephagenesis was evaluated, and the engineered bacteria were further optimized to increase the lycopene yield. Results: The engineered TD1.0/pTer7-pTer3 strain produced lycopene at a maximum yield of 0.20 mg/g dried cell weight (DCW). Replacing the downstream genes with those from S. lividans elevated lycopene production to 0.70 mg/g DCW in the TD1.0/pTer7-pTer5 strain. Replacing the PMmp1 promoter in plasmid pTer7 with the relatively weak Ptrc further increased lycopene production to 1.22 mg/g DCW, whereas replacing the Ptrc promoter in pTer7 with PMmp1 did not improve the lycopene yield. Conclusion: We report the first engineering of H. bluephagenesis for lycopene production. Co-optimization of the downstream genes and of the promoters governing MVA pathway gene expression can synergistically enhance microbial overproduction of lycopene.

    SY18ΔL60L: a new recombinant live attenuated African swine fever virus with protection against homologous challenge

    Introduction: African swine fever (ASF) is an acute and highly contagious disease, and its pathogen, the African swine fever virus (ASFV), threatens the global pig industry. At present, management of ASF epidemics relies mainly on biological prevention and control measures. Moreover, due to the large genome of ASFV, only half of its genes have been functionally characterized. Methods: Here, we evaluated a previously uncharacterized viral gene, L60L. To assess its function, we constructed a deletion strain (SY18ΔL60L) by knocking out the L60L gene of the SY18 strain. The growth characteristics and safety of SY18ΔL60L were evaluated in primary macrophages and in pigs, respectively. Results: The recombinant strain grew more slowly than the parental strain in vitro. Additionally, 3/5 (60%) of the pigs intramuscularly immunized with 10^5 50% tissue culture infectious doses (TCID50) of SY18ΔL60L survived the 21-day observation period. The surviving pigs were protected against challenge with the homologous lethal strain SY18. Importantly, there were no obvious clinical symptoms or viremia. Discussion: These results suggest that L60L may function as a virulence- and replication-related gene. Moreover, the SY18ΔL60L strain represents a new recombinant live-attenuated ASFV that can be employed in the development of additional candidate vaccine strains and in the elucidation of the mechanisms associated with ASF infection.